1.4 AI-Based Cybersecurity Attacks
Topic 1.4: AI-Based Cybersecurity Attacks
Artificial intelligence (AI) has become a powerful tool for both cyber defense and cyberattacks. Adversaries are increasingly leveraging AI-powered tools to augment their attacks, making them more sophisticated, convincing, and difficult to detect.
One alarming use of AI is in impersonation. Adversaries can use AI tools to analyze publicly available voice and image samples of an individual, often scraped from social media posts, to create a highly realistic digital avatar or voice clone. This allows an attacker to convincingly impersonate someone over a phone or video call. Such attacks can be used to trick friends or relatives into sending money under false pretenses or to deceive employees into revealing sensitive corporate information. As more systems adopt voice-based authentication, the threat of AI-driven voice impersonation grows significantly.
Generative AI, such as large language models (LLMs), has also made phishing attacks far more effective. Historically, many phishing messages could be identified by poor grammar or unnatural phrasing, as they were often written by non-native speakers. Now, adversaries can use LLMs to craft perfectly worded and contextually relevant phishing emails, texts, or social media messages in any language, making them nearly indistinguishable from legitimate communications.
LLMs themselves can be a target. Adversaries can use carefully crafted prompts, a technique known as prompt injection, to trick an LLM into revealing secure or sensitive information that it has processed from other user inputs or from its vast training data. Conversely, adversaries can also engage in data poisoning by creating websites with false information or modifying existing ones. When LLMs are trained on this tainted data, they may inadvertently repeat the false information, spreading disinformation.
AI also enhances an adversary's ability to perform reconnaissance. AI-powered tools can rapidly scan the internet, including social media platforms, public records, and company websites, to gather vast amounts of information about a target individual or organization. This automated process is far faster and more comprehensive than manual methods. Additionally, AI-enhanced coding tools can assist adversaries in writing new malware, finding vulnerabilities in existing software, or modifying legitimate code to perform malicious actions.
Protecting against these AI-augmented attacks requires new layers of vigilance. One simple but effective strategy is to establish shared secrets, such as a secret word or phrase, with close friends and relatives. This allows for identity verification in high-stakes situations, defeating impersonation attempts. Enabling multifactor authentication (MFA) is another critical defense. Even if an adversary successfully clones a target's voice to bypass voice authentication, a second authentication factor, like a one-time code sent to a phone, would block their access.
Users should also exercise caution with AI tools. It is important never to enter personal or sensitive data into public AI-powered tools like chatbots, as this information could be used to train the model and potentially be extracted by an adversary later. Finally, all output from AI tools should be carefully evaluated and verified using reputable, non-AI sources. AI models can "hallucinate" or present false information as fact, so critical thinking remains an essential defense.